Nvidia HPC/AI chip is actually six chips

The H200 NVL links up to four cards, double the number supported by its predecessor, the H100 NVL. It also offers an option for liquid cooling, which the H100 NVL did not. Instead of communicating over PCIe, the H200 NVL uses an NVLink interconnect bridge, which enables bidirectional throughput of 900 GB/s per GPU, seven times that of PCIe Gen 5.
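As a quick sanity check on that "seven times" comparison, here is a minimal sketch in Python. The PCIe Gen 5 x16 figures (roughly 63 GB/s per direction, from the link's 32 GT/s per lane and 128b/130b encoding) are assumptions drawn from the PCIe spec, not from this article:

```python
# Back-of-the-envelope check of the "seven times PCIe 5" claim.
# Assumption (not from the article): a PCIe Gen 5 x16 link runs
# 32 GT/s per lane with 128b/130b encoding, ~63 GB/s per direction.
lanes = 16
per_lane_gbps = 32 * (128 / 130)               # effective Gbit/s per lane
pcie5_unidir_gbs = lanes * per_lane_gbps / 8   # ~63 GB/s one way
pcie5_bidir_gbs = 2 * pcie5_unidir_gbs         # ~126 GB/s both directions

nvlink_bidir_gbs = 900                         # per-GPU figure from the article

print(f"PCIe 5 x16 bidirectional: {pcie5_bidir_gbs:.0f} GB/s")
print(f"NVLink / PCIe 5 ratio:    {nvlink_bidir_gbs / pcie5_bidir_gbs:.1f}x")
# -> roughly 7x, matching Nvidia's comparison
```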
The H200 NVL is aimed at enterprises looking to accelerate AI and HPC applications while improving energy efficiency through lower power consumption. It offers 1.5x more memory and 1.2x more memory bandwidth than the H100 NVL. For HPC workloads, performance is up to 1.3x that of the H100 NVL and 2.5x that of the NVIDIA Ampere architecture generation.
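For context, the per-card specs behind those memory ratios appear to work out as sketched below. The datasheet numbers are assumptions taken from Nvidia's published specifications (94 GB HBM3 and 3.9 TB/s for the H100 NVL; 141 GB HBM3e and 4.8 TB/s for the H200 NVL), not figures stated in this article:

```python
# Where the 1.5x and 1.2x figures likely come from, using assumed
# datasheet specs (not listed in the article itself):
h100_nvl = {"memory_gb": 94,  "bandwidth_tbs": 3.9}   # HBM3
h200_nvl = {"memory_gb": 141, "bandwidth_tbs": 4.8}   # HBM3e

mem_ratio = h200_nvl["memory_gb"] / h100_nvl["memory_gb"]
bw_ratio  = h200_nvl["bandwidth_tbs"] / h100_nvl["bandwidth_tbs"]

print(f"Memory:    {mem_ratio:.1f}x")    # ~1.5x
print(f"Bandwidth: {bw_ratio:.2f}x")     # ~1.23x, rounded to 1.2x
```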
Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are all expected to deliver a wide range of configurations supporting H200 NVL. Additionally, H200 NVL will be available in platforms from Aivres, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, MSI, Pegatron, QCT, Wistron and Wiwynn.